Results 1 - 20 of 55
1.
Chinese Journal of Digestive Endoscopy ; (12): 534-538, 2023.
Article in Chinese | WPRIM | ID: wpr-995410

ABSTRACT

Objective:To evaluate deep learning for differentiating the invasion depth of colorectal adenomas under image-enhanced endoscopy (IEE).Methods:A total of 13 246 IEE images from 3 714 lesions acquired from November 2016 to June 2021 were retrospectively collected in Renmin Hospital of Wuhan University, Shenzhen Hospital of Southern Medical University and the First Hospital of Yichang to construct a deep learning model differentiating colorectal adenoma lesions with deep submucosal invasion from those without. The performance of the deep learning model was validated on an independent test set and an external test set. The full test set was used to compare the diagnostic performance of 5 endoscopists with that of the deep learning model. A total of 35 videos were collected from January to June 2021 in Renmin Hospital of Wuhan University to validate the diagnostic performance of endoscopists with the assistance of the deep learning model.Results:The accuracy and Youden index of the deep learning model on the image test set were 93.08% (821/882) and 0.86, better than those of the endoscopists [the highest were 91.72% (809/882) and 0.78]. On the video test set, the accuracy and Youden index of the model were 97.14% (34/35) and 0.94. With the assistance of the model, the accuracy of endoscopists improved significantly [the highest was 97.14% (34/35)].Conclusion:The deep learning model obtained in this study could accurately identify deep submucosal invasion in colorectal adenomas, and could improve the diagnostic accuracy of endoscopists.
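The Youden index above is sensitivity + specificity - 1. A minimal Python sketch of the computation, using a hypothetical confusion-matrix split that is consistent with the reported 93.08% (821/882) accuracy but is not taken from the paper:

```python
# Youden index J = sensitivity + specificity - 1 for a binary classifier.
def youden_index(tp: int, fn: int, tn: int, fp: int) -> float:
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1

# Hypothetical split of the 882-image test set (tp + tn = 821 correct calls).
print(youden_index(tp=300, fn=20, tn=521, fp=41))  # ~0.86
```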

2.
Chinese Journal of Digestive Endoscopy ; (12): 372-378, 2023.
Article in Chinese | WPRIM | ID: wpr-995393

ABSTRACT

Objective:To construct a real-time artificial intelligence (AI)-assisted endoscopic diagnosis system based on the YOLO v3 algorithm, and to evaluate its ability to detect focal gastric lesions in gastroscopy.Methods:A total of 5 488 white light gastroscopic images (2 733 images with focal gastric lesions and 2 755 images without) from June to November 2019 and videos of 92 cases (288 168 clear stomach frames) from May to June 2020 at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University were retrospectively collected to test the AI System. A total of 3 997 prospective consecutive patients undergoing gastroscopy at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University from July 6, 2020 to November 27, 2020 and from May 6, 2021 to August 2, 2021 were enrolled to assess the clinical applicability of the AI System. When the AI System recognized an abnormal lesion, it marked the lesion with a blue box as a warning. The ability to identify focal gastric lesions and the frequency and causes of false positives and false negatives of the AI System were statistically analyzed.Results:On the image test set, the accuracy, sensitivity, specificity, positive predictive value and negative predictive value of the AI System were 92.3% (5 064/5 488), 95.0% (2 597/2 733), 89.5% (2 467/2 755), 90.0% (2 597/2 885) and 94.8% (2 467/2 603), respectively. On the video test set, they were 95.4% (274 792/288 168), 95.2% (109 727/115 287), 95.5% (165 065/172 881), 93.4% (109 727/117 543) and 96.7% (165 065/170 625), respectively. In clinical application, the detection rate of focal gastric lesions by the AI System was 93.0% (6 830/7 344). A total of 514 focal gastric lesions were missed by the AI System, mainly punctate erosions (48.8%, 251/514), diminutive xanthomas (22.8%, 117/514) and diminutive polyps (21.4%, 110/514). The median number of false positives per gastroscopy was 2 (quartiles 1, 4), most of which were due to normal mucosal folds (50.2%, 5 635/11 225), bubbles and mucus (35.0%, 3 928/11 225), and liquid deposited in the fundus (9.1%, 1 021/11 225).Conclusion:The AI System can increase the detection rate of focal gastric lesions.
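The five image-set metrics above follow directly from the stated counts. A short sketch recomputing them (the counts come from the abstract itself; only the variable names are ours):

```python
# Image test set: 2 733 lesion images and 2 755 lesion-free images.
tp, fn = 2597, 2733 - 2597   # lesion images: detected / missed
tn, fp = 2467, 2755 - 2467   # lesion-free images: passed / falsely flagged

accuracy    = (tp + tn) / (tp + fn + tn + fp)  # 5 064/5 488 = 92.3%
sensitivity = tp / (tp + fn)                   # 2 597/2 733 = 95.0%
specificity = tn / (tn + fp)                   # 2 467/2 755 = 89.5%
ppv         = tp / (tp + fp)                   # 2 597/2 885 = 90.0%
npv         = tn / (tn + fn)                   # 2 467/2 603 = 94.8%
```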

3.
Chinese Journal of Digestive Endoscopy ; (12): 293-297, 2023.
Article in Chinese | WPRIM | ID: wpr-995384

ABSTRACT

Objective:To assess the diagnostic efficacy of an artificial intelligence (AI)-based upper gastrointestinal endoscopic image-assisted diagnosis system (ENDOANGEL-LD) for detecting gastric lesions and neoplastic lesions under white light endoscopy.Methods:The diagnostic efficacy of ENDOANGEL-LD was tested on an image testing dataset and a video testing dataset. The image testing dataset included 300 images of gastric neoplastic lesions, 505 images of non-neoplastic lesions and 990 images of normal stomach from 191 patients in Renmin Hospital of Wuhan University from June 2019 to September 2019. The video testing dataset comprised 83 videos (38 gastric neoplastic lesions and 45 non-neoplastic lesions) of 78 patients in Renmin Hospital of Wuhan University from November 2020 to April 2021. The accuracy, sensitivity and specificity of ENDOANGEL-LD on the image testing dataset were calculated. The accuracy, sensitivity and specificity of ENDOANGEL-LD for gastric neoplastic lesions on the video testing dataset were compared with those of four senior endoscopists.Results:In the image testing dataset, the accuracy, sensitivity and specificity of ENDOANGEL-LD for gastric lesions were 93.9% (1 685/1 795), 98.0% (789/805) and 90.5% (896/990), respectively, while the accuracy, sensitivity and specificity of ENDOANGEL-LD for gastric neoplastic lesions were 88.7% (714/805), 91.0% (273/300) and 87.3% (441/505), respectively. In the video testing dataset, the sensitivity of ENDOANGEL-LD was higher than that of the four senior endoscopists [100.0% (38/38) VS 85.5% (130/152), χ2=6.220, P=0.013]. The accuracy [81.9% (68/83) VS 72.0% (239/332), χ2=3.408, P=0.065] and specificity [66.7% (30/45) VS 60.6% (109/180), χ2=0.569, P=0.451] of ENDOANGEL-LD were comparable with those of the four senior endoscopists.Conclusion:ENDOANGEL-LD can accurately detect gastric lesions and further diagnose neoplastic lesions, helping endoscopists in clinical work.

4.
Chinese Journal of Digestive Endoscopy ; (12): 206-211, 2023.
Article in Chinese | WPRIM | ID: wpr-995376

ABSTRACT

Objective:To analyze the cost-effectiveness of a relatively mature artificial intelligence (AI)-assisted diagnosis and treatment system (ENDOANGEL) for gastrointestinal endoscopy in China, and to provide objective and effective data support for hospital acquisition decisions.Methods:The numbers of gastrointestinal endoscopy procedures at the Endoscopy Center of Renmin Hospital of Wuhan University from January 2017 to December 2019 were collected to predict the number of procedures over the expected service life (10 years) of ENDOANGEL. The net present value, payback period and average rate of return were used to analyze the cost-effectiveness of ENDOANGEL.Results:The net present value of an ENDOANGEL over its expected service life (10 years) was 6 724 100 yuan, the payback period was 1.10 years, and the average rate of return reached 147.84%.Conclusion:ENDOANGEL shows significant economic benefits, and it is reasonable for hospitals to acquire a mature AI-assisted diagnosis and treatment system for gastrointestinal endoscopy.
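A minimal sketch of the three capital-budgeting measures named above. The discount rate, initial outlay and annual cash flows below are placeholders, not the hospital's figures, and the average-rate-of-return definition (mean annual net inflow over initial outlay) is one common convention, not necessarily the paper's:

```python
# Capital-budgeting measures for an equipment purchase (all figures hypothetical).
def npv(rate: float, cashflows: list[float]) -> float:
    # cashflows[0] is the initial outlay (negative), the rest are annual inflows.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_period(cashflows: list[float]) -> float:
    cumulative = cashflows[0]
    for t, cf in enumerate(cashflows[1:], start=1):
        if cumulative + cf >= 0:
            return t - 1 + (-cumulative) / cf  # interpolate within the year
        cumulative += cf
    return float("inf")

def average_rate_of_return(cashflows: list[float]) -> float:
    return (sum(cashflows[1:]) / len(cashflows[1:])) / -cashflows[0]

flows = [-1_000_000] + [900_000] * 10  # outlay, then 10 years of net inflows
print(npv(0.05, flows), payback_period(flows), average_rate_of_return(flows))
```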

5.
Chinese Journal of Digestive Endoscopy ; (12): 109-114, 2023.
Article in Chinese | WPRIM | ID: wpr-995366

ABSTRACT

Objective:To construct an artificial intelligence-assisted diagnosis system to recognize the endoscopic characteristics of Helicobacter pylori (HP) infection, and to evaluate its performance in real clinical cases.Methods:A total of 1 033 cases who underwent 13C-urea breath test and gastroscopy in the Digestive Endoscopy Center of Renmin Hospital of Wuhan University from January 2020 to March 2021 were collected retrospectively. Patients with positive results of the 13C-urea breath test (defined as HP infection) were assigned to the case group (n=485), and those with negative results to the control group (n=548). Gastroscopic images of mucosal features indicating HP-positive and HP-negative status, as well as gastroscopic images of HP-positive and HP-negative cases, were randomly assigned to the training set, validation set and test set at a ratio of 8∶1∶1. An artificial intelligence-assisted diagnosis system for identifying HP infection was developed based on a convolutional neural network (CNN) and a long short-term memory network (LSTM). In the system, the CNN identifies and extracts mucosal features from each patient's endoscopic images and generates feature vectors, and the LSTM then receives the feature vectors to judge HP infection status comprehensively. The diagnostic performance of the system was evaluated by sensitivity, specificity, accuracy and area under the receiver operating characteristic curve (AUC).Results:The diagnostic accuracy of the system for nodularity, atrophy, intestinal metaplasia, xanthoma, diffuse redness + spotty redness, mucosal swelling + enlarged folds + sticky mucus and HP-negative features was 87.5% (14/16), 74.1% (83/112), 90.0% (45/50), 88.0% (22/25), 63.3% (38/60), 80.1% (238/297) and 85.7% (36/42), respectively. The sensitivity, specificity, accuracy and AUC of the system for predicting HP infection were 89.6% (43/48), 61.8% (34/55), 74.8% (77/103) and 0.757, respectively. The diagnostic accuracy of the system was equivalent to that of endoscopists diagnosing HP infection under white light (74.8% VS 72.1%, χ2=0.246, P=0.620).Conclusion:The system developed in this study shows noteworthy ability in evaluating HP status, and can be used to assist endoscopists in diagnosing HP infection.
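The described architecture, a CNN that embeds each image followed by an LSTM that aggregates one patient's image sequence into an infection call, can be sketched as below. The backbone choice, feature width and hidden size are assumptions; the paper does not specify them:

```python
# Sketch: per-image CNN features -> per-patient LSTM -> HP-positive/negative logits.
import torch
import torch.nn as nn
from torchvision import models

class HPClassifier(nn.Module):
    def __init__(self, feat_dim: int = 512, hidden: int = 128):
        super().__init__()
        backbone = models.resnet18(weights=None)  # assumed backbone
        backbone.fc = nn.Identity()               # keep the 512-d feature vector
        self.cnn = backbone
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)          # HP positive / negative

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (n_images, 3, H, W), all frames from one examination
        feats = self.cnn(images).unsqueeze(0)     # (1, n_images, feat_dim)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])                 # (1, 2) logits

logits = HPClassifier()(torch.randn(7, 3, 224, 224))  # e.g. 7 images from one exam
```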

6.
Chinese Journal of Digestive Endoscopy ; (12): 965-971, 2022.
Article in Chinese | WPRIM | ID: wpr-995348

ABSTRACT

Objective:To develop an artificial intelligence-based system for measuring the size of gastrointestinal lesions under white light endoscopy in real time.Methods:The system consisted of 3 models. Model 1 identified the biopsy forceps and marked the contour of the forceps in consecutive video frames. The results of model 1 were submitted to model 2 and classified into open and closed forceps. Model 3 identified the lesions and marked the lesion boundary in real time. The extent of the lesion was then compared with the forceps contour to calculate lesion size. Dataset 1 consisted of 4 835 images collected retrospectively from January 1, 2017 to November 30, 2019 in Renmin Hospital of Wuhan University, used for model training and validation. Dataset 2 consisted of images collected prospectively from December 1, 2019 to June 4, 2020 at the Endoscopy Center of Renmin Hospital of Wuhan University, used to test the ability of the models to segment the boundaries of the biopsy forceps and lesions. Dataset 3 consisted of 302 images of 151 simulated lesions, each of which included one image at a larger tilt angle (45° from the vertical line of the lesion) and one at a smaller tilt angle (10° from the vertical line of the lesion), to test the ability of the system to measure lesion size with the biopsy forceps in different states. Dataset 4 was a video test set consisting of prospectively collected videos taken at the Endoscopy Center of Renmin Hospital of Wuhan University from August 5, 2019 to September 4, 2020. The accuracies of model 1 in identifying the presence or absence of biopsy forceps, model 2 in classifying the status of the biopsy forceps (open or closed) and model 3 in identifying the presence or absence of lesions were assessed, with endoscopist review or endoscopic surgery pathology as the gold standard. Intersection over union (IoU) was used to evaluate the forceps segmentation of model 1 and the lesion segmentation of model 3, and the absolute and relative errors were used to evaluate the ability of the system to measure lesion size.Results: (1) A total of 1 252 images were included in dataset 2, including 821 images of forceps (401 open and 420 closed), 431 images without forceps, 640 images of lesions and 612 images without lesions. Model 1 judged 433 images as without forceps (430 correctly) and 819 images as with forceps (818 correctly), for an accuracy of 99.68% (1 248/1 252). On the 818 correctly judged forceps images, the segmentation of the biopsy forceps lobes by model 1 achieved a mean IoU of 0.91 (95% CI: 0.90-0.92). The classification accuracy of model 2 was evaluated on the 818 forceps images correctly judged by model 1: model 2 judged 384 images as open forceps (382 correctly) and 434 as closed forceps (416 correctly), for a classification accuracy of 97.56% (798/818). Model 3 judged 654 images as containing lesions (626 correctly) and 598 as without lesions (584 correctly), for an accuracy of 96.65% (1 210/1 252). On the 626 lesion images correctly judged by model 3, the mean IoU was 0.86 (95% CI: 0.85-0.87). (2) In dataset 3, when the tilt angle of the biopsy forceps was small, the mean absolute error of the system's lesion size measurement was 0.17 mm (95% CI: 0.08-0.28 mm) and the mean relative error was 3.77% (95% CI: 0.00%-10.85%). When the biopsy forceps was tilted at a large angle, the mean absolute error was 0.17 mm (95% CI: 0.09-0.26 mm) and the mean relative error was 4.02% (95% CI: 2.90%-5.14%). (3) In dataset 4, a total of 780 images from 59 endoscopic examination videos of 59 patients were included. The mean absolute error of the system's lesion size measurement was 0.24 mm (95% CI: 0.00-0.67 mm), and the mean relative error was 9.74% (95% CI: 0.00%-29.83%).Conclusion:The system could measure the size of gastrointestinal lesions under endoscopy accurately and may improve the accuracy of endoscopists' size estimates.
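The size read-out implied by the pipeline is a pixel-to-millimetre calibration: the forceps' physical width is known, so its pixel extent scales the lesion's pixel extent. A sketch under assumptions (the 2.4 mm open-forceps width is illustrative, not a value from the paper; the masks stand in for the outputs of models 1 and 3):

```python
import numpy as np

def lesion_size_mm(lesion_mask: np.ndarray, forceps_mask: np.ndarray,
                   forceps_width_mm: float = 2.4) -> float:
    """Scale the lesion's pixel extent by the known forceps width. Both masks
    are non-empty boolean arrays produced by the segmentation models."""
    lesion_px = np.ptp(np.where(lesion_mask)[1]) + 1    # widest horizontal span
    forceps_px = np.ptp(np.where(forceps_mask)[1]) + 1
    return lesion_px * forceps_width_mm / forceps_px
```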

7.
Chinese Journal of Digestion ; (12): 42-49, 2022.
Article in Chinese | WPRIM | ID: wpr-934133

ABSTRACT

Objective:To analyze the expression of circular RNA circ_0008274 in cetuximab-resistant colorectal cancer cells using bioinformatics technology and to explore its involvement in the development of cetuximab resistance.Methods:Five concentrations of cetuximab (10, 50, 100, 150 and 200 nmol/L) were set. Cetuximab-resistant cells DiFi-R and Caco-2-R were established from colorectal cancer cells DiFi and Caco-2 by a concentration-escalation method. The expression of circ_0008274 in DiFi-R and Caco-2-R cells was detected by reverse transcription-polymerase chain reaction (RT-PCR). The interaction and regulation between circ_0008274 and microRNA (miR)-140-3p were analyzed by dual-luciferase reporter assay. SMARCC1, a highly expressed gene related to cetuximab resistance, was identified by Western blotting. Circ_0008274 in DiFi-R and Caco-2-R cells was knocked down by transfection of the small interfering RNA si-circ_0008274, and the changes in colony formation and cell proliferation after knockdown were compared. MiR-140-3p mimic and blank control miR were transfected into DiFi-R and Caco-2-R cells, and the differences in cell proliferation between the two transfections were analyzed. After circ_0008274 knockdown in Caco-2-R cells, the changes in SMARCC1 protein expression rescued by pcDNA3.1 SMARCC1 and in cell viability were analyzed. Tumor specimens of 15 colorectal cancer patients hospitalized in Renmin Hospital of Wuhan University from March 2019 to August 2020 were included. According to treatment response, the patients were divided into a sensitive group (11 cases) and a drug-resistant group (4 cases). The relative expression levels of circ_0008274, its downstream target SMARCC1 and miR-140-3p in the colorectal cancer tissues of the two groups were detected by RT-PCR. Independent-sample t test was used for statistical analysis.Results:The level of circ_0008274 in DiFi-R cells was (2.33±0.12) times that in DiFi cells, and the level in Caco-2-R cells was (2.92±0.42) times that in Caco-2 cells; the differences were statistically significant (t=19.97 and 7.80, both P<0.05). The dual-luciferase reporter assay showed that after the miR-140-3p mimic bound wild-type circ_0008274, the relative fluorescence intensity was lower than before (0.28±0.04 vs. 1.00±0.00), and the difference was statistically significant (t=-30.71, P=0.001). The expression of SMARCC1 protein in DiFi-R and Caco-2-R cells was significantly increased compared with that in DiFi and Caco-2 cells (2.22±0.36 vs. 0.61±0.17, 0.85±0.11 vs. 0.35±0.08), and the differences were statistically significant (t=6.23 and 6.32, both P<0.01). After circ_0008274 knockdown, the numbers of colonies formed by DiFi-R and Caco-2-R cells were both lower than before knockdown (36.67±4.04 vs. 66.00±9.54, 17.35±4.04 vs. 52.33±8.02), and the relative viable cell ratios after treatment with 10, 50, 100, 150 and 200 nmol/L cetuximab were also lower than before knockdown (DiFi-R cells: (73.75±2.75)% vs. (88.10±2.48)%, (56.50±6.66)% vs. (75.15±6.03)%, (35.75±5.32)% vs. (59.63±6.67)%, (24.25±3.30)% vs. (52.40±6.71)%, (6.25±2.75)% vs. (48.60±5.38)%; Caco-2-R cells: (63.74±5.25)% vs. (85.76±4.79)%, (56.50±4.20)% vs. (83.50±3.90)%, (46.00±2.94)% vs. (80.00±6.05)%, (35.30±5.56)% vs. (68.30±4.57)%, (12.25±7.37)% vs. (62.40±7.51)%), and the differences were statistically significant (t=4.90, 6.71, -7.75, -4.16, -5.60, -7.53, -14.02, -6.19, -8.33, -10.10, -9.17 and -9.56, all P<0.01). After transfection with the miR-140-3p mimic, the relative viable cell ratios of DiFi-R and Caco-2-R cells treated with 10, 50, 100, 150 and 200 nmol/L cetuximab were both lower than those transfected with blank control miR (DiFi-R cells: (71.55±4.97)% vs. (85.90±2.66)%, (51.58±3.91)% vs. (74.95±6.35)%, (41.23±8.84)% vs. (58.43±7.05)%, (28.60±5.26)% vs. (53.75±5.65)%, (18.90±5.13)% vs. (51.30±3.30)%; Caco-2-R cells: (61.75±2.22)% vs. (90.10±1.41)%, (53.25±4.17)% vs. (86.18±2.69)%, (46.38±4.55)% vs. (77.75±6.70)%, (36.10±8.76)% vs. (70.15±4.18)%, (24.25±2.63)% vs. (65.10±7.62)%), and the differences were statistically significant (t=-5.09, -6.47, -3.05, -6.28, -10.30, -21.48, -12.83, -8.01, -6.79 and -10.12, all P<0.01). After circ_0008274 knockdown, the SMARCC1 protein level of Caco-2-R cells rescued by pcDNA3.1 SMARCC1 was higher than before rescue (0.63±0.19 vs. 0.09±0.03), and the relative viable cell ratios after treatment with 10, 50, 100, 150 and 200 nmol/L cetuximab were also higher than before rescue ((93.10±3.56)% vs. (83.83±3.97)%, (83.28±4.26)% vs. (60.90±7.02)%, (61.83±2.12)% vs. (50.10±5.59)%, (53.20±3.74)% vs. (40.50±3.42)%, (46.20±4.08)% vs. (30.80±4.82)%), and the differences were statistically significant (t=3.55, 3.52, 5.44, 3.87, 4.64 and 4.88, all P<0.01). The relative expression levels of circ_0008274 and its downstream target SMARCC1 in the colorectal cancer tissues of the drug-resistant group were higher than those of the sensitive group (6.45±1.32 vs. 2.26±1.39, 12.53±1.60 vs. 3.82±1.56), while the relative expression level of miR-140-3p was lower than that of the sensitive group (3.91±1.25 vs. 7.43±2.23); the differences were statistically significant (t=5.22, 9.51 and -2.93, all P<0.01).Conclusions:Circular RNA circ_0008274 is highly expressed in colorectal cancer tissues and cetuximab-resistant cells; it interacts with miR-140-3p and inhibits its expression, up-regulates SMARCC1, and participates in the development of cetuximab resistance. Rescue with pcDNA3.1 SMARCC1 blocks the sensitizing effect of si-circ_0008274 on cetuximab and significantly increases the cetuximab resistance of colorectal cancer cells.

8.
Chinese Journal of Digestive Endoscopy ; (12): 295-300, 2022.
Article in Chinese | WPRIM | ID: wpr-934107

ABSTRACT

Objective:To construct a deep learning-based artificial intelligence system for multi-station imaging in endoscopic ultrasound (EUS) bile duct scanning, to assist endoscopists in learning multi-station imaging and improving their operating skills.Methods:A total of 522 EUS videos from Renmin Hospital of Wuhan University and Wuhan Union Hospital from May 2016 to October 2020 were collected, and images were captured from these videos, including 3 000 white light images and 31 003 EUS images from Renmin Hospital of Wuhan University, and 799 EUS images from Wuhan Union Hospital. The images were divided into a training set and a test set. The system comprised a model filtering out white light gastroscopy images (model 1), a model distinguishing standard station images from non-standard station images (model 2), and a station-classification model for standard EUS bile duct scanning images (model 3), which assigned standard images to the liver window, stomach window, duodenal bulb window or duodenal descending window. Then 110 images were randomly selected from the test set for a man-machine competition comparing the accuracy of multi-station imaging among experts, senior endoscopists and the artificial intelligence model.Results:The accuracies of model 1 and model 2 were 100.00% (1 200/1 200) and 93.36% (2 938/3 147), respectively. Those of model 3 on the internal validation dataset were 97.23% (1 687/1 735) for the liver window, 96.89% (1 681/1 735) for the stomach window, 98.73% (1 713/1 735) for the duodenal bulb window, and 97.18% (1 686/1 735) for the duodenal descending window; on the external validation dataset they were 89.61% (716/799), 92.74% (741/799), 90.11% (720/799) and 92.24% (737/799), respectively. In the man-machine competition, the accuracy of the station-classification model was 89.09% (98/110), higher than that of the senior endoscopists [85.45% (94/110), 74.55% (82/110) and 85.45% (94/110)] and close to the level of the experts [92.73% (102/110) and 90.00% (99/110)].Conclusion:The deep learning-based EUS bile duct scanning system constructed in this study can assist endoscopists in performing standard multi-station scanning in real time more accurately, improving the completeness and quality of EUS.
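The three models form a per-frame cascade: only EUS frames pass stage 1, only standard-station frames pass stage 2, and stage 3 assigns the window. A schematic sketch (the model objects are placeholders for the trained classifiers, not an API from the paper):

```python
# Per-frame cascade over the three classifiers described in the abstract.
WINDOWS = ["liver", "stomach", "duodenal bulb", "duodenal descending"]

def classify_frame(frame, model1, model2, model3):
    if model1(frame) == "white_light":    # stage 1: drop white light gastroscopy frames
        return None
    if model2(frame) == "non_standard":   # stage 2: keep standard station images only
        return None
    return WINDOWS[model3(frame)]         # stage 3: one of the four windows
```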

9.
Chinese Journal of Digestive Endoscopy ; (12): 133-138, 2022.
Article in Chinese | WPRIM | ID: wpr-934086

ABSTRACT

Objective:To evaluate an intelligent gastrointestinal endoscopy quality control system in gastroscopy.Methods:Fourteen endoscopists from Renmin Hospital of Wuhan University were assigned to the quality-control group or the control group by random number table. In the pre-quality-control stage (from April 20, 2019 to May 31, 2019), data of gastroscopies performed by the enrolled endoscopists were collected. In the training stage (June 1 to 30, 2019), the quality-control group was trained in quality control knowledge and in the use of the intelligent gastrointestinal endoscopy quality control system, while the control group was trained in quality control knowledge only. In the post-quality-control stage (from July 1, 2019 to August 20, 2019), a quality report with review and feedback was submitted weekly to the endoscopists in the quality-control group, while the control group received no quality control report. Gastroscopies performed by the enrolled endoscopists during this period were also collected. Changes in the precancerous lesion detection rate in the two groups were compared.Results:Seven endoscopists were assigned to each group. A total of 3 446 gastroscopies were included in the pre-quality-control stage (n=1 651, including 753 cases in the quality-control group and 898 cases in the control group) and the post-quality-control stage (n=1 795, including 892 cases in the quality-control group and 903 cases in the control group). The detection rate of precancerous lesions in the quality-control group increased by 3.6 percentage points [3.3% (29/892) VS 6.9% (52/753), χ2=11.65, P<0.01], while that of the control group increased by 0.4 percentage points [3.3% (30/903) VS 3.7% (33/898), χ2=0.17, P=0.684].Conclusion:The intelligent gastrointestinal endoscopy quality control system with review and feedback can monitor and improve the quality of gastroscopy.

10.
Chinese Journal of Digestion ; (12): 464-469, 2022.
Article in Chinese | WPRIM | ID: wpr-958335

ABSTRACT

Objective:To construct a deep learning-based diagnostic system for gastrointestinal submucosal tumors (SMT) under endoscopic ultrasonography (EUS) to help endoscopists diagnose SMT.Methods:From January 1, 2019 to December 15, 2021, at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University, 245 patients with SMT confirmed by pathological diagnosis who underwent EUS and endoscopic submucosal dissection were enrolled, and a total of 3 400 EUS images were collected. Among these, 2 722 EUS images were used for training the lesion segmentation model and 2 209 EUS images for training the stromal tumor versus leiomyoma classification model; 283 and 191 images were selected as independent test sets to evaluate the segmentation and classification models, respectively. Thirty images were selected as an independent dataset for a human-machine competition comparing the lesion classification accuracy between the classification model and 6 endoscopists. The performance of the segmentation model was evaluated by indices such as Intersection-over-Union (IoU) and the Dice coefficient; the performance of the classification model was evaluated by accuracy. Chi-square test was used for statistical analysis.Results:The mean Intersection-over-Union and Dice coefficient of the lesion segmentation model were 0.754 and 0.835, respectively, and its accuracy, recall and F1 score were 95.2%, 98.9% and 97.0%, respectively. With segmentation as a first step, the accuracy of the classification model increased from 70.2% to 92.1%. In the human-machine competition, the accuracy of the classification model in differentiating stromal tumors from leiomyomas was 86.7% (26/30), superior to that of 4 of the 6 endoscopists (56.7%, 17/30; 56.7%, 17/30; 53.3%, 16/30; 60.0%, 18/30), and the differences were statistically significant (χ2=7.11, 7.36, 8.10 and 6.13, all P<0.05). There was no significant difference in accuracy between the other 2 endoscopists (76.7%, 23/30; 73.3%, 22/30) and the model (both P>0.05).Conclusion:This system could be used for the auxiliary diagnosis of SMT under EUS in the future, providing powerful evidence for subsequent treatment decisions.
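The two overlap metrics used to score the segmentation model are standard; for boolean masks of identical shape they reduce to a few lines:

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    # Intersection-over-Union of two boolean masks
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    # Dice coefficient = 2|A∩B| / (|A| + |B|)
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum())
```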

11.
Chinese Journal of Digestion ; (12): 433-438, 2022.
Article in Chinese | WPRIM | ID: wpr-958330

ABSTRACT

Objective:To compare the abilities of deep convolutional neural network-crop (DCNN-C) and deep convolutional neural network-whole (DCNN-W), two artificial intelligence systems based on different training methods, in diagnosing early gastric cancer (EGC) under magnifying image-enhanced endoscopy (M-IEE).Methods:Images and video clips of EGC and non-cancerous lesions under M-IEE in narrow band imaging or blue laser imaging mode were retrospectively collected at the Endoscopy Center of Renmin Hospital of Wuhan University to form the training and test sets for DCNN-C and DCNN-W. The abilities of DCNN-C and DCNN-W to identify EGC in the image test set were compared, as were the abilities of DCNN-C, DCNN-W and 3 senior endoscopists (average performance) to identify EGC in the video test set. Paired chi-squared test and chi-squared test were used for statistical analysis. Inter-observer agreement was expressed as Cohen's Kappa coefficient (Kappa value).Results:In the image test set, the accuracy, sensitivity, specificity and positive predictive value of DCNN-C in EGC diagnosis were 94.97% (1 133/1 193), 97.12% (202/208), 94.52% (931/985) and 78.91% (202/256), respectively, which were higher than those of DCNN-W (86.84%, 1 036/1 193; 92.79%, 193/208; 85.58%, 843/985 and 57.61%, 193/335), and the differences were statistically significant (χ2=4.82, 4.63, 61.04 and 29.69; P=0.028, =0.035, <0.001 and <0.001). In the video test set, the accuracy, specificity and positive predictive value of the senior endoscopists in EGC diagnosis were 67.67%, 60.42% and 53.37%, respectively, which were lower than those of DCNN-C (93.00%, 92.19% and 87.18%), and the differences were statistically significant (χ2=20.83, 16.41 and 11.61; P<0.001, <0.001 and =0.001). The accuracy, specificity and positive predictive value of DCNN-C in EGC diagnosis were also higher than those of DCNN-W (79.00%, 70.31% and 64.15%, respectively), and the differences were statistically significant (χ2=7.04, 8.45 and 6.18; P=0.007, 0.003 and 0.013). There were no significant differences in accuracy, specificity or positive predictive value between the senior endoscopists and DCNN-W in EGC diagnosis (all P>0.05). The sensitivities of the senior endoscopists, DCNN-W and DCNN-C in EGC diagnosis were 80.56%, 94.44% and 94.44%, respectively, and the differences were not statistically significant (all P>0.05). The agreement analysis showed that the agreement between the senior endoscopists and the gold standard was fair to moderate (Kappa=0.259, 0.532, 0.329), the agreement between DCNN-W and the gold standard was moderate (Kappa=0.587), and the agreement between DCNN-C and the gold standard was very high (Kappa=0.851).Conclusion:With the same training set, DCNN-C diagnoses EGC better than DCNN-W and senior endoscopists, and the diagnostic level of DCNN-W is equivalent to that of senior endoscopists.
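The naming suggests the two regimes differ in what the network sees during training: DCNN-W the whole frame, DCNN-C a crop around the annotated lesion. A sketch of the presumed cropping step (the 10% margin is an assumption; the paper does not state its crop rule):

```python
import numpy as np

def crop_to_lesion(image: np.ndarray, box: tuple, margin: float = 0.10) -> np.ndarray:
    """Crop an (H, W, C) image to the lesion bounding box plus a small margin."""
    x0, y0, x1, y1 = box
    dx, dy = int((x1 - x0) * margin), int((y1 - y0) * margin)
    h, w = image.shape[:2]
    return image[max(0, y0 - dy):min(h, y1 + dy),
                 max(0, x0 - dx):min(w, x1 + dx)]
```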

12.
Chinese Journal of Digestive Endoscopy ; (12): 707-713, 2022.
Article in Chinese | WPRIM | ID: wpr-958309

ABSTRACT

Objective:To evaluate the Kyoto gastritis score for diagnosing Helicobacter pylori (HP) infection in the Chinese population.Methods:A total of 902 cases who underwent 13C-urea breath test and gastroscopy at the same time at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University from January 2020 to December 2020 were studied retrospectively, including 345 HP-positive and 557 HP-negative cases. The differences in mucosal features and Kyoto gastritis score between HP-positive and HP-negative patients were analyzed. A receiver operating characteristic (ROC) curve was plotted for predicting HP infection by the Kyoto gastritis score.Results:Compared with HP-negative patients, nodularity [8.1% (28/345) VS 0.2% (1/557), χ2=86.29, P<0.001], diffuse redness [47.8% (165/345) VS 6.6% (37/557), χ2=413.63, P<0.001], atrophy [27.8% (96/345) VS 13.8% (77/557), χ2=52.90, P<0.001] and fold enlargement [69.0% (238/345) VS 36.6% (204/557), χ2=175.38, P<0.001] occurred more frequently in HP-positive patients. For predicting HP infection, nodularity showed the highest specificity [99.8% (556/557)] and positive predictive value [96.6% (28/29)], diffuse redness showed the largest area under the ROC curve (AUC, 0.707), and fold enlargement showed the highest sensitivity [69.0% (238/345)] and negative predictive value [76.7% (353/460)]. The Kyoto gastritis score of HP-positive patients was higher than that of HP-negative patients [2 (1, 2) VS 0 (0, 1), Z=20.82, P<0.001]. At the optimal threshold of 2, the AUC of the Kyoto gastritis score for predicting HP infection was 0.779.Conclusion:Nodularity, diffuse redness, atrophy and fold enlargement under gastroscopy suggest HP infection, and a Kyoto gastritis score ≥2 provides a sufficient reference for diagnosing HP positivity.
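Choosing the cut-off of 2 is a standard ROC exercise: sweep thresholds over the score and pick the one maximizing the Youden index. A sketch with toy data (the arrays are placeholders, not the study's 902 cases):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def best_cutoff(scores, labels):
    fpr, tpr, thresholds = roc_curve(labels, scores)
    return thresholds[np.argmax(tpr - fpr)], roc_auc_score(labels, scores)

scores = np.array([0, 1, 2, 3, 2, 1, 0, 4])  # Kyoto gastritis scores (toy)
labels = np.array([0, 0, 1, 1, 1, 0, 0, 1])  # 13C-urea breath test result (toy)
cutoff, auc = best_cutoff(scores, labels)    # cutoff = 2 on this toy data
```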

13.
Chinese Journal of Digestive Endoscopy ; (12): 538-541, 2022.
Article in Chinese | WPRIM | ID: wpr-958290

ABSTRACT

Objective:To evaluate the impact of an artificial intelligence (AI) system on the diagnosis rate of precancerous states of gastric cancer.Methods:A single-center self-controlled study was conducted, with factors such as the endoscope mainframe and model, operating doctor, season and climate controlled, and pathology as the gold standard. The diagnosis rates of precancerous states of gastric cancer, including atrophic gastritis (AG) and intestinal metaplasia (IM), under traditional gastroscopy (from September 1, 2019 to November 30, 2019) and AI-assisted endoscopy (from September 1, 2020 to November 15, 2020) in the Eighth Hospital of Wuhan were statistically analyzed and compared, with subgroup analysis by the seniority of doctors.Results:Compared with traditional gastroscopy, the AI system significantly improved the diagnosis rate of AG [13.3% (38/286) VS 7.4% (24/323), χ2=5.689, P=0.017] and IM [33.9% (97/286) VS 26.0% (84/323), χ2=4.544, P=0.033]. For junior doctors (less than 5 years of endoscopic experience), the AI system had a more pronounced effect on the diagnosis rates of AG [11.9% (22/185) VS 5.8% (11/189), χ2=4.284, P=0.038] and IM [30.3% (56/185) VS 20.6% (39/189), χ2=4.580, P=0.032]. For senior doctors (more than 10 years of endoscopic experience), the diagnosis rates of AG and IM increased slightly, but the differences were not statistically significant.Conclusion:The AI system shows potential to improve the diagnosis rate of precancerous states of gastric cancer, especially for junior endoscopists, and to reduce missed diagnoses of early gastric cancer.

14.
Chinese Journal of Digestion ; (12): 606-612, 2021.
Article in Chinese | WPRIM | ID: wpr-912216

ABSTRACT

Objective:To develop early gastric cancer (EGC) detection models for magnifying blue laser imaging (ME-BLI) and magnifying narrow-band imaging (ME-NBI) based on deep convolutional neural networks, to compare the performance differences of the two models, and to explore the effects of training methods on accuracy.Methods:Images of benign gastric lesions and EGC under ME-BLI and ME-NBI were collected, forming five data sets and three test sets. Data set 1 included 2 024 noncancerous-lesion images and 452 EGC images under ME-BLI. Data set 2 included 2 024 noncancerous-lesion images and 452 EGC images under ME-NBI. Data set 3 was the combination of data sets 1 and 2 (a total of 4 048 noncancerous-lesion and 904 EGC images under ME-BLI and ME-NBI). Data set 4: on the basis of data set 2, another 62 noncancerous-lesion and 2 305 EGC images under ME-NBI were added (2 086 noncancerous-lesion and 2 757 EGC images under ME-NBI). Data set 5: on the basis of data set 3, another 62 noncancerous-lesion and 2 305 EGC images under ME-NBI were added (4 110 noncancerous-lesion and 3 209 EGC images under ME-NBI and ME-BLI). Test set A included 422 noncancerous-lesion and 197 EGC images under ME-BLI. Test set B included 422 noncancerous-lesion and 197 EGC images under ME-NBI. Test set C was the combination of test sets A and B (844 noncancerous-lesion and 394 EGC images under ME-BLI and ME-NBI). Five models were constructed from these five data sets respectively, and their performance was evaluated on the three test sets. Per-lesion videos were collected to compare the performance of the deep convolutional neural network models under ME-BLI and ME-NBI for EGC detection in a clinical environment, and the models were also compared with four senior endoscopists. The primary endpoints were the diagnostic accuracy, sensitivity and specificity for EGC. Chi-square test was used for statistical analysis.Results:The performance of model 1 was the best in test set A, with accuracy, sensitivity and specificity of 76.90% (476/619), 63.96% (126/197) and 82.94% (350/422), respectively. The performance of model 2 was the best in test set B, with accuracy, sensitivity and specificity of 86.75% (537/619), 92.89% (183/197) and 83.89% (354/422), respectively. The performance of model 3 was the best in test set B, with accuracy, sensitivity and specificity of 86.91% (538/619), 84.26% (166/197) and 88.15% (372/422), respectively. The performance of model 4 was the best in test set B, with accuracy, sensitivity and specificity of 85.46% (529/619), 95.43% (188/197) and 80.81% (341/422), respectively. The performance of model 5 was the best in test set B, with accuracy, sensitivity and specificity of 83.52% (517/619), 96.95% (191/197) and 77.25% (326/422), respectively. In image recognition of EGC, the accuracy of models 2 to 5 was higher than that of model 1, and the differences were statistically significant (χ2=147.90, 149.67, 134.20 and 115.30, all P<0.01). The sensitivity and specificity of models 2 and 3 were higher than those of model 1, the specificity of model 2 was lower than that of model 3, and the differences were statistically significant (χ2=131.65, 64.15, 207.60, 262.03 and 96.73, all P<0.01). The sensitivity of models 4 and 5 was higher than that of models 1 to 3, the specificity of models 4 and 5 was lower than that of models 1 to 3, and the differences were statistically significant (χ2=151.16, 165.49, 71.35, 112.47, 132.62, 153.14, 176.93, 74.62, 14.09, 15.47, 6.02 and 5.80, all P<0.05). The per-lesion video test showed that the average accuracy of doctors 1 to 4 was 68.16%, while the accuracy of models 1 to 5 was 69.47% (66/95), 69.47% (66/95), 70.53% (67/95), 76.84% (73/95) and 80.00% (76/95), respectively. There were no significant differences in accuracy among models 1 to 5, or between models 1 to 5 and doctors 1 to 4 (all P>0.05).Conclusions:The ME-BLI EGC recognition model based on deep learning has good accuracy, but its diagnostic efficacy is slightly worse than that of the ME-NBI model. An EGC recognition model combining ME-NBI and ME-BLI performs better than a single-modality model. A more sensitive ME-NBI model can be obtained by increasing the number of ME-NBI images, especially EGC images, but at the cost of specificity.
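An unpaired chi-squared comparison of two models' accuracies can be reproduced from correct/incorrect counts, as sketched below with the counts for models 1 and 2 on their respective 619-image test sets (illustrative only; the paired tests quoted above would additionally need per-image agreement counts, which the abstract does not give):

```python
from scipy.stats import chi2_contingency

table = [[476, 619 - 476],   # model 1 on test set A: correct / incorrect
         [537, 619 - 537]]   # model 2 on test set B: correct / incorrect
chi2, p, dof, expected = chi2_contingency(table)
```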

15.
Chinese Journal of Digestive Endoscopy ; (12): 801-805, 2021.
Article in Chinese | WPRIM | ID: wpr-912176

ABSTRACT

Objective:To evaluate deep learning for improving the diagnostic accuracy for adenomatous and non-adenomatous polyps.Methods:Non-magnifying narrow band imaging (NBI) polyp images obtained from the Endoscopy Center of Renmin Hospital of Wuhan University were divided into three datasets. Dataset 1 (2 699 adenomatous and 1 846 non-adenomatous non-magnifying NBI polyp images from January 2018 to October 2020) was used for model training and validation of the diagnosis system. Dataset 2 (288 adenomatous and 210 non-adenomatous non-magnifying NBI polyp images from January 2018 to October 2020) was used to compare the accuracy of polyp classification between the system and endoscopists; at the same time, the accuracy of 4 trainees in polyp classification with and without the assistance of the system was compared. Dataset 3 (203 adenomatous and 141 non-adenomatous non-magnifying NBI polyp images from November 2020 to January 2021) was used to test the system prospectively.Results:The accuracy of the system in polyp classification was 90.16% (449/498) in dataset 2, superior to that of the endoscopists. With the assistance of the system, the accuracy of colorectal polyp diagnosis by the trainees was significantly improved. In the prospective test, the accuracy of the system was 89.53% (308/344).Conclusion:The colorectal polyp classification system based on deep learning can significantly improve the accuracy of trainees in polyp classification.

16.
Chinese Journal of Digestive Endoscopy ; (12): 783-788, 2021.
Article in Chinese | WPRIM | ID: wpr-912173

ABSTRACT

Objective:To assess the influence of an artificial intelligence (AI)-assisted diagnosis system on the performance of endoscopists in diagnosing gastric cancer by magnifying narrow-band imaging (M-NBI).Methods:M-NBI images of early gastric cancer (EGC) and non-gastric cancer from Renmin Hospital of Wuhan University from March 2017 to January 2020 and from public datasets were collected, among which 4 667 images (1 950 of EGC and 2 717 of non-gastric cancer) formed the training set and 1 539 images (483 of EGC and 1 056 of non-gastric cancer) composed the test set. The model was trained using deep learning techniques. One hundred M-NBI videos from Beijing Cancer Hospital and Renmin Hospital of Wuhan University between June 9, 2020 and November 17, 2020 were prospectively collected as a video test set, 38 of gastric cancer and 62 of non-gastric cancer. Four endoscopists from four other hospitals participated in the study, diagnosing the video test set twice, with and without AI assistance, and the influence of the system on their performance was assessed.Results:Without AI assistance, the accuracy, sensitivity and specificity of the endoscopists' diagnosis of gastric cancer were 81.00%±4.30%, 71.05%±9.67% and 87.10%±10.88%, respectively. With AI assistance, they were 86.50%±2.06%, 84.87%±11.07% and 87.50%±4.47%, respectively. Diagnostic accuracy (P=0.302) and sensitivity (P=0.180) of the endoscopists with AI assistance were numerically improved compared with those without, although the differences were not statistically significant. The accuracy, sensitivity and specificity of AI in identifying gastric cancer in the video test set were 88.00% (88/100), 97.37% (37/38) and 82.26% (51/62), respectively. The sensitivity of AI was higher than the endoscopists' average (P=0.002).Conclusion:The AI-assisted diagnosis system is an effective tool for assisting the diagnosis of gastric cancer in M-NBI: it can improve the diagnostic ability of endoscopists and remind them of high-risk areas in real time to reduce the probability of missed diagnosis.

17.
Chinese Journal of Digestive Endoscopy ; (12): 778-782, 2021.
Article in Chinese | WPRIM | ID: wpr-912172

ABSTRACT

Objective:To develop an endoscopic ultrasonography (EUS) station recognition and pancreatic segmentation system based on deep learning and to validate its efficacy.Methods:Data of 269 EUS procedures were retrospectively collected from Renmin Hospital of Wuhan University between December 2016 and December 2019, and were divided into 3 datasets: (1) dataset A of 205 procedures for model training, containing 16 305 images for classification training and 1 953 images for segmentation training; (2) dataset B of 44 procedures for model testing, containing 1 606 images for classification testing and 480 images for segmentation testing; (3) dataset C of 20 procedures with 150 images for comparing the performance of the models and endoscopists. EUS experts A and B (each with more than 10 years of experience) classified and labeled all images of datasets A, B and C through discussion, and the results served as the gold standard. EUS expert C and senior EUS endoscopists D and E (each with more than 5 years of experience) classified and labeled the images in dataset C, and their results were used for comparison with the model. The main outcomes included classification accuracy, segmentation Dice (F1 score) and the Cohen Kappa coefficient for consistency analysis.Results:On test dataset B, the model achieved a mean classification accuracy of 94.1%, and mean Dice values for pancreatic and vascular segmentation of 0.826 and 0.841, respectively. On dataset C, the classification accuracy of the model reached 90.0%, against 89.3%, 88.7% and 87.3% for expert C and senior endoscopists D and E, respectively. The Dice values for pancreatic and vascular segmentation were 0.740 and 0.859 for the model, 0.708 and 0.778 for expert C, 0.747 and 0.875 for senior endoscopist D, and 0.774 and 0.789 for senior endoscopist E; the model was comparable to the expert level. Consistency analysis showed high consistency between the model and the endoscopists (Kappa coefficient 0.823 between the model and expert C, 0.840 between the model and senior endoscopist D, and 0.799 between the model and senior endoscopist E).Conclusion:The EUS station classification and pancreatic segmentation system based on deep learning can be used for quality control of pancreatic EUS, with classification and segmentation performance comparable to that of EUS experts.
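Cohen's kappa, used above for the consistency analysis, compares two raters' labels over the same items while correcting for chance agreement. A sketch (the label lists are placeholders for per-image station labels, not study data):

```python
from sklearn.metrics import cohen_kappa_score

model_labels       = [0, 1, 2, 3, 1, 0, 2, 3]  # placeholder station labels
endoscopist_labels = [0, 1, 2, 3, 1, 0, 3, 3]
kappa = cohen_kappa_score(model_labels, endoscopist_labels)
```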

18.
Chinese Journal of Digestive Endoscopy ; (12): 107-114, 2021.
Article in Chinese | WPRIM | ID: wpr-885700

ABSTRACT

Objective:To construct an intelligent performance measurement system for gastrointestinal endoscopy and to analyze its value for endoscopic quality improvement.Methods:The intelligent gastrointestinal endoscopy performance measurement system was developed using deep convolutional neural networks (DCNN) and deep reinforcement learning, based on the Digital Imaging and Communications in Medicine (DICOM) standard. Images were acquired from patients undergoing gastrointestinal endoscopy at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University from December 2016 to October 2018. The system applied a cecum recognition model (DCNN1), an in vitro/in vivo image recognition model (DCNN2), and a model identifying 26 gastric sites (DCNN3) to monitor indices such as cecal intubation rate, colonoscopic withdrawal time, gastroscopic inspection time, and gastroscopic coverage. Images of 83 gastroscopies and 205 colonoscopies acquired at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University from March to November 2019 were randomly selected to examine the effectiveness of the system.Results:The intelligent gastrointestinal endoscopy performance measurement system covered quality analysis of both gastroscopy and colonoscopy, including all of the above indices, and reports could be generated automatically at any time. The accuracies for cecal intubation rate, colonoscopic withdrawal time, gastroscopic inspection time, and gastroscopic coverage were 92.5% (172/186), 91.7% (188/205), 100.0% (83/83) and 89.3% (1 928/2 158), respectively.Conclusion:The intelligent performance measurement system for gastrointestinal endoscopy can be recommended for the quality control of gastrointestinal endoscopy; endoscopists can use its feedback to improve the quality of their examinations.

19.
Chinese Journal of Digestive Endoscopy ; (12): 584-590, 2020.
Article in Chinese | WPRIM | ID: wpr-871425

ABSTRACT

Objective:To establish a deep convolutional neural network (DCNN) model based on the YOLO and ResNet algorithms for automatic detection of colorectal polyps and to test its performance.Methods:Colonoscopy images and videos collected from the database of the Digestive Endoscopy Center of Renmin Hospital of Wuhan University from January 2018 to March 2019 were divided into three databases (databases 1, 3 and 4). The public database CVC-ClinicDB (612 polyp images extracted from 29 colonoscopy videos provided by Barcelona Hospital, Spain) was used as database 2. Database 1 (4 700 colonoscopy images from January 2018 to November 2018, including 3 700 intestinal polyp images and 1 000 non-polyp images) was used for training and validating the DCNN model. Database 2 (CVC-ClinicDB) and database 3 (720 colonoscopy images from January 2019 to March 2019, including 320 intestinal polyp images and 400 non-polyp images) were used for testing the DCNN model on image detection. Database 4 (15 colonoscopy videos in December 2019, containing 33 polyps) was used for testing the DCNN model on video detection. The sensitivity, specificity, accuracy and false positive rate of the DCNN model for detecting intestinal polyps were calculated.Results:The sensitivity of the DCNN model for detecting intestinal polyps in database 2 was 93.19% (602/646). In database 3, the DCNN model showed an accuracy of 95.00% (684/720), sensitivity of 98.13% (314/320), specificity of 92.50% (370/400), and false positive rate of 7.50% (30/400). In database 4, the DCNN model achieved a per-polyp sensitivity of 100.00% (33/33), a per-image accuracy of 96.29% (133 840/138 998), a per-image sensitivity of 90.24% (4 066/4 506), a per-image specificity of 96.49% (129 774/134 492), and a per-image false positive rate of 3.51% (4 718/134 492).Conclusion:The DCNN model constructed in this study has high sensitivity and specificity for automatic detection of colorectal polyps in both colonoscopy images and videos, has a low false positive rate in videos, and has the potential to assist endoscopists in the diagnosis of colorectal polyps.
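The video results distinguish per-image from per-polyp sensitivity: a frame counts only for itself, while a polyp counts as detected if any of its frames is hit. A schematic sketch of the bookkeeping (data structures are ours, not the paper's):

```python
def per_image_sensitivity(frame_hits: list[int]) -> float:
    # frame_hits: 1 if the model fired on a polyp-containing frame, else 0
    return sum(frame_hits) / len(frame_hits)

def per_polyp_sensitivity(polyps: dict[str, list[int]]) -> float:
    # polyps: polyp id -> the hit flags of all frames showing that polyp
    return sum(any(hits) for hits in polyps.values()) / len(polyps)
```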

20.
Chinese Journal of Digestive Endoscopy ; (12): 476-480, 2020.
Article in Chinese | WPRIM | ID: wpr-871422

ABSTRACT

Objective:To construct an artificial intelligence-assisted diagnosis system that detects gastric ulcer lesions and automatically distinguishes benign from malignant gastric ulcers.Methods:A total of 1 885 endoscopy images were collected from November 2016 to April 2019 in the Digestive Endoscopy Center of Renmin Hospital of Wuhan University, including 636 normal images, 630 images of benign gastric ulcers, and 619 images of malignant gastric ulcers. Of these, 1 735 images formed the training data set and 150 images were used for validation. The images were input into the Res-net50 model based on the fastai framework, the Res-net50 model based on the Keras framework, and the VGG-16 model based on the Keras framework, respectively. Three separate binary classification models were constructed: normal gastric mucosa versus benign ulcers, normal gastric mucosa versus malignant ulcers, and benign versus malignant ulcers.Results:The VGG-16 model showed the best classification performance. Its accuracy on the validation set was 98.0%, 98.0% and 85.0% for distinguishing normal gastric mucosa from benign ulcers, normal gastric mucosa from malignant ulcers, and benign from malignant ulcers, respectively.Conclusion:The artificial intelligence-assisted diagnosis system obtained in this study shows noteworthy ability in detecting ulcerative lesions, and is expected to be used clinically to assist doctors in detecting ulcers and differentiating benign from malignant ulcers.
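The abstract names the Keras VGG-16 variant as the best performer; a minimal sketch of such a binary classifier under Keras (input size, pooling head and optimizer are assumptions, not the paper's configuration):

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# VGG-16 backbone without the ImageNet head, topped with a sigmoid for binary output.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # e.g. benign vs malignant ulcer
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```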
